6 research outputs found

    sEMG-based hand gesture recognition with deep learning

    Hand gesture recognition based on surface electromyographic (sEMG) signals is a promising approach for the development of Human-Machine Interfaces (HMIs) with natural control, such as intuitive robot interfaces or poly-articulated prostheses. However, real-world applications are limited by reliability problems due to motion artifacts, postural and temporal variability, and sensor re-positioning. This master thesis is the first application of deep learning to the Unibo-INAIL dataset, the first public sEMG dataset exploring variability across subjects, sessions and arm postures, collected over 8 sessions for each of 7 able-bodied subjects executing 6 hand gestures in 4 arm postures. In the most recent studies, this variability is addressed with training strategies based on training set composition, which improve the inter-posture and inter-day generalization of classical (i.e. non-deep) machine learning classifiers, among which the RBF-kernel SVM yields the highest accuracy. The deep architecture realized in this work is a 1d-CNN implemented in PyTorch, inspired by a 2d-CNN reported to perform well on other public benchmark databases. On this 1d-CNN, various training strategies based on training set composition were implemented and tested. Multi-session training yields higher inter-session validation accuracies than single-session training. Two-posture training is the best postural strategy, confirming the benefit of training on more than one posture, and yields 81.2% inter-posture test accuracy. Five-day training is the best multi-day strategy and yields 75.9% inter-day test accuracy. All results are close to the baseline. Moreover, the multi-day results highlight the phenomenon of user adaptation, indicating that training should also prioritize recent data. Though not better than the baseline, the achieved classification accuracies rightfully place the 1d-CNN among the candidates for further research.
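
    As a rough sketch of the architecture class described, not the thesis's exact network, a minimal PyTorch 1d-CNN for windowed sEMG classification could look like the following; the 4-channel input, 150-sample window and layer sizes are illustrative assumptions.

        # Minimal sketch of a 1d-CNN sEMG classifier in PyTorch; NOT the
        # thesis's exact architecture. The 4 input channels, 150-sample
        # window and layer sizes are illustrative assumptions.
        import torch
        import torch.nn as nn

        class SEMG1dCNN(nn.Module):
            def __init__(self, in_channels=4, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
                    nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2),
                    nn.BatchNorm1d(64), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),  # global pooling: window-length independent
                )
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):  # x: (batch, channels, time)
                return self.classifier(self.features(x).squeeze(-1))

        model = SEMG1dCNN()
        logits = model(torch.randn(8, 4, 150))  # 8 windows -> (8, 6) logits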

    TCN Mapping Optimization for Ultra-Low Power Time-Series Edge Inference

    Temporal Convolutional Networks (TCNs) are emerging lightweight Deep Learning models for time-series analysis. We introduce an automated exploration approach and a library of optimized kernels to map TCNs on Parallel Ultra-Low Power (PULP) microcontrollers. Our approach minimizes latency and energy by exploiting a layer tiling optimizer to jointly find the tiling dimensions and select among alternative implementations of the causal and dilated 1D-convolution operations at the core of TCNs. We benchmark our approach on a commercial PULP device, achieving up to 103X lower latency and 20.3X lower energy than the Cube-AI toolkit executed on the STM32L4, and from 2.9X to 26.6X lower energy compared to commercial closed-source and academic open-source approaches on the same hardware target.
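
    To make the tiling-optimizer idea concrete, the following sketch enumerates tile sizes for a single 1D-convolution layer under a hypothetical on-chip L1 memory budget and keeps the minimum-cost candidate; the int8 buffers, 64 kB budget and the crude latency proxy are assumptions for illustration, not the authors' actual tool or cost model.

        # Schematic tile-size search for one 1D-convolution layer under an
        # L1 memory budget. Budget and cost model are illustrative
        # assumptions, not the paper's optimizer.
        from itertools import product
        from math import ceil

        L1_BUDGET = 64 * 1024  # bytes of on-chip L1 memory (assumed)

        def tile_bytes(t_ch, t_len, in_ch, k):
            # input tile + weight tile + output tile, 1 byte/element (int8)
            return in_ch * (t_len + k - 1) + t_ch * in_ch * k + t_ch * t_len

        def est_latency(t_ch, t_len, out_ch, out_len, in_ch, k):
            n_tiles = ceil(out_ch / t_ch) * ceil(out_len / t_len)
            macs = out_ch * out_len * in_ch * k
            return macs + 1000 * n_tiles  # compute + fixed per-tile overhead

        def best_tiling(out_ch, out_len, in_ch, k):
            feasible = ((t_ch, t_len)
                        for t_ch, t_len in product(range(1, out_ch + 1),
                                                   range(1, out_len + 1))
                        if tile_bytes(t_ch, t_len, in_ch, k) <= L1_BUDGET)
            return min(feasible,
                       key=lambda t: est_latency(*t, out_ch, out_len, in_ch, k))

        print(best_tiling(out_ch=64, out_len=256, in_ch=32, k=3))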

    Low-latency detection of epileptic seizures from IEEG with temporal convolutional networks on a low-power parallel MCU

    Epilepsy is a severe neurological disorder that affects about 1% of the world population, and one-third of cases are drug-resistant. Apart from surgery, drug-resistant patients can benefit from closed-loop brain stimulation, eliminating or mitigating the epileptic symptoms. For the closed loop to be accurate and safe, it is paramount to couple stimulation with a detection system able to recognize seizure onset with high sensitivity and specificity and short latency, while meeting the strict computation and energy constraints of always-on real-time monitoring platforms. We propose a novel setup for iEEG-based epilepsy detection, exploiting a Temporal Convolutional Network (TCN) optimized for deployability on low-power edge devices for real-time monitoring. We test our approach on the Short-Term SWEC-ETHZ iEEG Database, containing a total of 100 epileptic seizures from 16 patients (from 2 to 14 per patient), comparing it with the state-of-the-art (SoA) approach, represented by Hyperdimensional Computing (HD). Our TCN attains a detection delay 10 s shorter than the SoA, without any drop in sensitivity or specificity. Contrary to previous literature, we also enforce a time-consistent setup, where training seizures always precede testing seizures chronologically. When deployed on a commercial low-power parallel microcontroller unit (MCU), each inference with our model has a latency of only 5.68 ms and an energy cost of only 124.5 µJ when executed on 1 core, and a latency of 1.46 ms and an energy cost of 51.2 µJ when parallelized on 8 cores. This latency and energy consumption, lower than the current SoA, demonstrates the suitability of our solution for real-time, long-term embedded epilepsy monitoring.
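
    The causal and dilated 1D convolutions at the heart of a TCN can be sketched in PyTorch as follows (an illustrative block, not the deployed detector); causality is obtained by padding only the past side of the time axis, and doubling dilations grow the receptive field exponentially.

        # Illustrative causal dilated-convolution stack, the core TCN
        # operation; channel widths are assumptions, not the deployed model.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class CausalConv1d(nn.Module):
            def __init__(self, in_ch, out_ch, kernel_size, dilation):
                super().__init__()
                self.pad = (kernel_size - 1) * dilation
                self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                                      dilation=dilation)

            def forward(self, x):               # x: (batch, channels, time)
                return self.conv(F.pad(x, (self.pad, 0)))  # pad past, not future

        # With kernel 3 and dilations 1, 2, 4, 8 the stack sees
        # 1 + 2*(1+2+4+8) = 31 past samples.
        tcn = nn.Sequential(*[
            nn.Sequential(CausalConv1d(1 if d == 1 else 16, 16, 3, d), nn.ReLU())
            for d in (1, 2, 4, 8)
        ])
        y = tcn(torch.randn(2, 1, 1024))        # time length is preserved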

    Temporal Variability Analysis in sEMG Hand Grasp Recognition using Temporal Convolutional Networks

    Hand movement recognition via the surface electromyographic (sEMG) signal is a promising approach for advances in Human-Computer Interaction. However, this field has to deal with two main issues: (1) the long-term reliability of sEMG-based control is limited by the variability affecting the sEMG signal (especially variability over time); (2) the classification algorithms need to be suitable for implementation on embedded devices, which have strict constraints in terms of power budget and computational resources. Current solutions present a performance drop over time that makes them unsuitable for reliable gesture controller design. In this paper, we address the temporal variability of sEMG-based grasp recognition, proposing a new approach based on Temporal Convolutional Networks, a class of deep learning algorithms particularly suited for time series analysis and temporal pattern recognition. Our approach improves the best results achieved in the literature on the NinaPro DB6, a reference dataset for temporal variability analysis of sEMG, by 7.6%. Moreover, when targeting the much more challenging inter-session accuracy objective, our method achieves an accuracy drop of just 4.8% between intra- and inter-session validation. This proves the suitability of our setup for a robust, reliable long-term implementation. Furthermore, we distill the network using deep network quantization and pruning techniques, demonstrating that our approach can use down to a 120× lower memory footprint than the initial network and a 4× lower memory footprint than a baseline Support Vector Machine, with an inter-session accuracy degradation of only 2.5%, proving that the solution is suitable for embedded resource-constrained implementations.
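
    A minimal PyTorch sketch of the pruning-plus-quantization style of compression mentioned above, under assumed settings (a toy model and a 90% sparsity target, not the paper's exact pipeline), might look like this:

        # Magnitude pruning plus post-training dynamic int8 quantization;
        # the toy model and 90% sparsity are illustrative assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.utils.prune as prune

        model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))

        for module in model:                    # zero the smallest 90% of weights
            if isinstance(module, nn.Linear):
                prune.l1_unstructured(module, name="weight", amount=0.9)
                prune.remove(module, "weight")  # make the sparsity permanent

        quantized = torch.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8)
        print(quantized(torch.randn(1, 64)).shape)  # -> torch.Size([1, 8])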

    Motor-Unit Ordering of Blindly-Separated Surface-EMG Signals for Gesture Recognition

    Hand gestures are one of the most natural and expressive ways for humans to convey information, and thus hand gesture recognition has become a research hotspot in the human-machine interface (HMI) field. In particular, biological signals such as surface electromyography (sEMG) can be used to recognize hand gestures to implement intuitive control systems, but the decoding from the sEMG signal to actual control signals is non-trivial. Blind source separation (BSS)-based methods, such as convolutive independent component analysis (ICA), can be used to decompose the sEMG signal into its fundamental elements, the motor unit action potential trains (MUAPTs), which can then be processed with a classifier to predict hand gestures. However, ICA does not guarantee a consistent ordering of the extracted motor units (MUs), which poses a problem when considering multiple recording sessions and subjects; therefore, in this work we propose and validate three approaches to address this variability: two ordering criteria, based on firing rate and negative entropy, and a re-calibration procedure, which allows the decomposition model to retain information about previous recording sessions when decomposing new data. In particular, we show that re-calibration is the most robust approach, yielding an accuracy of up to 99.4%, and always greater than 85% across all the different scenarios that we tested. These results prove that our proposed system, which we publish open-source and which is based on biologically plausible features rather than on data-driven, black-box models, is capable of robust generalization.
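
    A possible NumPy sketch of the two ordering criteria, assuming binary spike trains and a standard log-cosh negentropy approximation (the exact criteria and array shapes are assumptions, not the paper's implementation):

        # Order blindly-separated motor-unit sources by firing rate or by
        # approximate negentropy; shapes and the log-cosh negentropy
        # estimate are illustrative assumptions.
        import numpy as np

        def firing_rate(spike_train, fs):
            # mean firing rate in Hz of a binary spike train sampled at fs
            return spike_train.sum() * fs / spike_train.size

        def negentropy(source):
            # FastICA-style approximation J(y) ~ (E[G(y)] - E[G(g)])^2 with
            # G(u) = log cosh(u); E[G(g)] ~ 0.3746 for g standard Gaussian
            y = (source - source.mean()) / source.std()
            return (np.mean(np.log(np.cosh(y))) - 0.3746) ** 2

        def order_units(sources, spike_trains, fs, criterion="rate"):
            # sources, spike_trains: arrays of shape (n_units, n_samples)
            if criterion == "rate":
                keys = [firing_rate(s, fs) for s in spike_trains]
            else:
                keys = [negentropy(s) for s in sources]
            idx = np.argsort(keys)[::-1]    # descending: most active first
            return sources[idx], spike_trains[idx]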

    Tackling time-variability in semg-based gesture recognition with on-device incremental learning and temporal convolutional networks

    Human-machine interaction is showing promising results for robotic prosthesis control and rehabilitation. In these fields, hand movement recognition via surface electromyographic (sEMG) signals is one of the most promising approaches. However, it still suffers from the sEMG signal's variability over time, which negatively impacts classification robustness. In particular, the non-stationarity of input signals and the shift of surface electrodes can cause up to a 30% degradation in gesture recognition accuracy. This work addresses the temporal variability of sEMG-based gesture recognition by proposing to train a Temporal Convolutional Network (TCN) incrementally over multiple gesture training sessions. Using incremental learning, we re-train our model on stored latent data spanning multiple sessions. We validate our approach on the UniBo-20-Session dataset, which includes 8 hand gestures from 3 subjects. Our incremental learning framework obtains 18.9% higher accuracy compared to a baseline with a standard single training session. Deploying our TCN on a Parallel Ultra-Low Power (PULP) microcontroller unit (MCU), the GAP8, we achieve an inference latency and energy of 12.9 ms and 0.66 mJ, respectively, with a weight memory footprint of 427 kB and a data memory footprint of 0.5-32 MB.
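
    A minimal sketch of incremental re-training on stored latent data, under an assumed frozen-backbone split and replay-buffer policy (not the paper's exact framework), could look like this:

        # Latent-replay sketch: a frozen backbone encodes each new session,
        # latents are stored, and the classifier head is re-trained on all
        # sessions so far. Model split and buffer policy are assumptions.
        import torch
        import torch.nn as nn

        backbone = nn.Sequential(nn.Conv1d(4, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten())
        head = nn.Linear(16, 8)                 # 8 gesture classes
        buffer_x, buffer_y = [], []             # latent data across sessions

        def add_session_and_retrain(x, y, epochs=5):
            with torch.no_grad():               # backbone frozen: store latents
                buffer_x.append(backbone(x))
            buffer_y.append(y)
            xs, ys = torch.cat(buffer_x), torch.cat(buffer_y)
            opt = torch.optim.Adam(head.parameters(), lr=1e-3)
            for _ in range(epochs):             # re-train on all stored sessions
                opt.zero_grad()
                nn.functional.cross_entropy(head(xs), ys).backward()
                opt.step()

        add_session_and_retrain(torch.randn(32, 4, 150),
                                torch.randint(0, 8, (32,)))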